Tail Bounds for Stochastic Approximation
Authors
Abstract
Stochastic-approximation gradient methods are attractive for large-scale convex optimization because they offer inexpensive iterations. They are especially popular in data-fitting and machine-learning applications where the data arrives in a continuous stream, or it is necessary to minimize large sums of functions. It is known that by appropriately decreasing the variance of the error at each iteration, the expected rate of convergence matches that of the underlying deterministic gradient method. Conditions are given under which this happens with overwhelming probability.
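A minimal sketch of the idea, assuming an illustrative quadratic objective, noise level, and geometric decay factor (none of which come from the paper): gradient descent with an additive-noise gradient oracle whose variance is cut at each iteration, so that the error decays at roughly the linear rate of the exact method.

```python
# Sketch only: a well-conditioned least-squares problem with a noisy gradient
# oracle whose variance is shrunk geometrically, mimicking the "decreasing
# variance" schedule the abstract refers to.
import numpy as np

rng = np.random.default_rng(0)
d = 20
A = rng.standard_normal((d, d)) / np.sqrt(d) + 2.0 * np.eye(d)   # well-conditioned matrix
b = rng.standard_normal(d)
x_star = np.linalg.solve(A.T @ A, A.T @ b)                        # minimizer of 0.5*||Ax - b||^2

L = np.linalg.norm(A.T @ A, 2)   # Lipschitz constant of the gradient
step = 1.0 / L
x = np.zeros(d)
sigma2 = 1.0                     # initial oracle variance (illustrative)
rho = 0.7                        # geometric decay factor (illustrative)

for k in range(60):
    grad = A.T @ (A @ x - b)
    noisy_grad = grad + np.sqrt(sigma2) * rng.standard_normal(d)  # stochastic gradient oracle
    x = x - step * noisy_grad
    sigma2 *= rho                # reduce the variance of the error each iteration
    if k % 10 == 0:
        print(f"iter {k:2d}   ||x - x*|| = {np.linalg.norm(x - x_star):.3e}")
```

With the noise removed (sigma2 = 0) the loop reduces to plain gradient descent, whose linear rate is the deterministic benchmark the abstract refers to.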
Similar resources
Stochastic bounds for a single server queue with general retrial times
We propose to use a mathematical method based on stochastic comparisons of Markov chains in order to derive bounds on performance indices. The main goal of this paper is to investigate various monotonicity properties of a single-server retrial queue with a first-come-first-served (FCFS) orbit and general retrial times using stochastic ordering techniques.
On the adjustment coefficient, drawdowns and Lundberg-type bounds for random walk
Consider a random walk whose (light-tailed) increments have positive mean. Lower and upper bounds are provided for the expected maximal value of the random walk until it experiences a given drawdown d. These bounds, related to the Calmar ratio in Finance, are of the form (exp{αd} − 1)/α and (K exp{αd} − 1)/α for some K > 1, in terms of the adjustment coefficient α (E[exp{−αX}] = 1) of the insur...
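A minimal numeric sketch, assuming an illustrative increment distribution not taken from the paper: the adjustment coefficient alpha solves E[exp(-alpha*X)] = 1 for a positive-mean increment X. For X ~ N(mu, sigma^2) the moment generating function has a closed form, and the positive root is alpha = 2*mu/sigma^2, which the numeric solver should recover.

```python
# Sketch only: compute the adjustment coefficient for normally distributed increments.
import numpy as np
from scipy.optimize import brentq

mu, sigma = 0.5, 1.0   # illustrative light-tailed increments with positive mean

def log_mgf_neg(alpha):
    # log E[exp(-alpha*X)] for X ~ N(mu, sigma^2); the nonzero root is the
    # adjustment coefficient
    return -alpha * mu + 0.5 * (alpha * sigma) ** 2

alpha_hat = brentq(log_mgf_neg, 1e-6, 10.0)   # bracket away from the trivial root at 0
print("numeric alpha:", alpha_hat, "  closed form:", 2 * mu / sigma**2)
```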
On the bounds in Poisson approximation for independent geometric distributed random variables
The main purpose of this note is to establish some bounds in Poisson approximation for row-wise arrays of independent geometrically distributed random variables using the operator method. Some results related to random sums of independent geometrically distributed random variables are also investigated.
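A minimal Monte Carlo sketch, not the paper's operator-method bound and with illustrative parameters: it compares the distribution of a sum of independent geometric (failure-count) random variables with the Poisson law of the same mean via an empirical total-variation distance.

```python
# Sketch only: empirical check that a sum of geometric variables with rare failures
# is close to a Poisson distribution with matching mean.
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(2)
p = np.full(50, 0.95)                  # success probabilities; failures are rare
lam = np.sum((1 - p) / p)              # mean of the sum of failure counts

# numpy's geometric counts trials (support 1, 2, ...), so subtract 1 to count failures
samples = (rng.geometric(p, size=(200_000, p.size)) - 1).sum(axis=1)

kmax = samples.max()
emp = np.bincount(samples, minlength=kmax + 1) / samples.size
tv = 0.5 * np.abs(emp - poisson.pmf(np.arange(kmax + 1), lam)).sum()
print(f"lambda = {lam:.3f},  empirical TV distance to Poisson ~ {tv:.4f}")
```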
Efficiency Evaluation and Ranking DMUs in the Presence of Interval Data with Stochastic Bounds
Owing to uncertainty, DEA occasionally faces situations with imprecise data, especially when a set of DMUs includes missing, ordinal, interval, stochastic, or fuzzy data. Therefore, how to evaluate the efficiency of a set of DMUs in interval environments is a problem worth studying. In this paper, we discuss a new method for evaluation and ranking i...
Parallelizing Stochastic Approximation Through Mini-Batching and Tail-Averaging
This work characterizes the benefits of averaging techniques widely used in conjunction with stochastic gradient descent (SGD). In particular, this work sharply analyzes: (1) mini-batching, a method of averaging many samples of the gradient, both to reduce the variance of a stochastic gradient estimate and to parallelize SGD, and (2) tail-averaging, a method involving averaging the final few i...
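A minimal sketch of the two techniques, on an illustrative least-squares problem with assumed batch size, step size, and iteration budget (none taken from the paper): mini-batch SGD whose output is the average of the final iterates rather than the last one.

```python
# Sketch only: mini-batch SGD with tail-averaging on a synthetic regression problem.
import numpy as np

rng = np.random.default_rng(1)
n, d = 2000, 10
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

batch = 32                  # mini-batch size (illustrative)
step = 0.05
iters = 2000
tail_start = iters // 2     # average the second half of the iterates

w = np.zeros(d)
tail_sum = np.zeros(d)
for t in range(iters):
    idx = rng.integers(0, n, size=batch)            # sample a mini-batch
    grad = X[idx].T @ (X[idx] @ w - y[idx]) / batch # averaged (lower-variance) gradient
    w = w - step * grad
    if t >= tail_start:
        tail_sum += w                               # accumulate tail iterates

w_tail = tail_sum / (iters - tail_start)
print("last iterate error :", np.linalg.norm(w - w_true))
print("tail-averaged error:", np.linalg.norm(w_tail - w_true))
```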
Journal:
Volume, Issue:
Pages: -
Publication date: 2013